Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require large amounts of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and the biological plausibility of object recognition systems. Here we present an SNN model that efficiently represents visual stimuli from the Fashion-MNIST dataset using spike-latency coding and winner-take-all inhibition (WTA-I). Stimuli are preprocessed with center-surround receptive fields and then fed to a layer of spiking neurons whose synaptic weights are updated using spike-timing-dependent plasticity (STDP). We investigate how the quality of the represented objects changes under different WTA-I schemes and demonstrate that a network of 150 spiking neurons can efficiently represent objects with as few as 40 spikes. Studying how core object recognition can be performed with biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
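As a rough, dependency-light sketch of this encoding (not the authors' implementation; the function names and parameter values are hypothetical), pixel intensities can be mapped to spike latencies, with WTA-I keeping only the earliest spikes:

```python
import numpy as np

def spike_latency_encode(image, t_max=100.0):
    """Spike-latency coding: brighter pixels fire earlier."""
    intensities = image.flatten().astype(float) / image.max()
    # Fully dark pixels never fire (infinite latency).
    return np.where(intensities > 0, t_max * (1.0 - intensities), np.inf)

def wta_inhibition(latencies, k=40):
    """Winner-take-all inhibition: keep the k earliest spikes, suppress the rest."""
    order = np.argsort(latencies)
    kept = np.full_like(latencies, np.inf)
    kept[order[:k]] = latencies[order[:k]]
    return kept

# Toy 28x28 stimulus standing in for a preprocessed Fashion-MNIST image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(28, 28))
spikes = wta_inhibition(spike_latency_encode(image), k=40)
print(f"stimulus represented with {np.isfinite(spikes).sum()} spikes")
```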
Recent work has shown that large language models are capable of generating natural language reasoning steps or Chains-of-Thoughts (CoT) to answer a multi-step question when prompted to do so. This is insufficient, however, when the necessary knowledge is not available or up-to-date within a model's parameters. A straightforward approach to address this is to retrieve text from an external knowledge source using the question as a query and prepend it as context to the model's input. This, however, is also insufficient for multi-step QA where \textit{what to retrieve} depends on \textit{what has already been derived}. To address this issue we propose IRCoT, a new approach that interleaves retrieval with CoT for multi-step QA, guiding the retrieval with CoT and in turn using retrieved results to improve CoT. Our experiments with GPT-3 show substantial improvements in retrieval (up to 22 points) and downstream QA (up to 16 points) over the baselines on four datasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. Notably, our method also works well for much smaller models such as Flan-T5-large (0.7B) without any additional training.
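A minimal sketch of the interleaving loop (the retriever and CoT generator here are toy stubs standing in for a real retriever and an LLM; all names are hypothetical):

```python
def ircot_answer(question, retrieve, generate_cot_step, max_steps=8):
    """Interleave retrieval with chain-of-thought (CoT): each new CoT sentence
    guides the next retrieval, and retrieved paragraphs inform the next step."""
    paragraphs = retrieve(question)          # initial retrieval with the question
    cot = []
    for _ in range(max_steps):
        step = generate_cot_step(question, paragraphs, cot)
        cot.append(step)
        if "answer is" in step.lower():      # simple termination heuristic
            break
        paragraphs += retrieve(step)         # retrieve with the latest CoT sentence
    return cot

# Toy stubs so the sketch runs end to end.
corpus = {"Lost Gravity": "Lost Gravity was manufactured by Mack Rides.",
          "Mack Rides": "Mack Rides is a company from Germany."}
retrieve = lambda q: [t for k, t in corpus.items() if k.lower() in q.lower()]

def generate_cot_step(question, paragraphs, cot):
    # A real system would prompt an LLM with question + paragraphs + cot.
    if any("Germany" in p for p in paragraphs):
        return "So the answer is: Germany."
    return "Lost Gravity was manufactured by Mack Rides."

print(ircot_answer("Where is the company that made Lost Gravity from?",
                   retrieve, generate_cot_step))
```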
Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models have significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
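The schema-distance-weighted column sampling can be sketched as follows (the exponential weighting and the toy schema distances are illustrative assumptions, not the paper's exact formulation):

```python
import random

def schema_distance_weighted_sample(columns, join_distance, anchor, temperature=1.0):
    """Sample a column with probability decaying in its join-path distance from
    an anchor column, so synthesized queries favor schema-related columns."""
    weights = [2.0 ** (-join_distance[anchor][col] / temperature) for col in columns]
    return random.choices(columns, weights=weights, k=1)[0]

# Toy schema: distances measured in foreign-key hops (hypothetical values).
cols = ["student.name", "student.age", "enrollment.grade", "course.title"]
dist = {"student.name": {"student.name": 0, "student.age": 0,
                         "enrollment.grade": 1, "course.title": 2}}
print(schema_distance_weighted_sample(cols, dist, anchor="student.name"))
```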
Concept bottleneck models (CBMs) (Koh et al. 2020) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on the concept label predictions. We extend CBMs to interactive prediction settings where the model can query a human collaborator for the labels of some concepts. We develop an interaction policy that, at prediction time, chooses which concepts to request a label for so as to maximally improve the final prediction. We demonstrate that a simple policy combining concept prediction uncertainty and influence of the concept on the final prediction achieves strong performance and outperforms a static approach proposed in Koh et al. (2020) as well as active feature acquisition methods proposed in the literature. We show that the interactive CBM can achieve accuracy gains of 5-10% with only 5 interactions over competitive baselines on the Caltech-UCSD Birds, CheXpert, and OAI datasets.
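A minimal sketch of such an interaction policy (the scoring rule below, entropy times influence magnitude, is a plausible instantiation, not necessarily the paper's exact formula):

```python
import numpy as np

def choose_concept_to_query(concept_probs, influence):
    """Pick the concept whose human label should most improve the final
    prediction: high predictive uncertainty and high downstream influence."""
    entropy = -(concept_probs * np.log(concept_probs + 1e-12)
                + (1 - concept_probs) * np.log(1 - concept_probs + 1e-12))
    return int(np.argmax(entropy * np.abs(influence)))

probs = np.array([0.95, 0.55, 0.40, 0.99])  # predicted concept probabilities
infl = np.array([0.10, 0.80, 0.30, 0.90])   # influence on final label (hypothetical)
print("query concept index:", choose_concept_to_query(probs, infl))
```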
Power grids across the world play an important societal and economic role by providing uninterrupted, reliable, and transient-free power to industries, businesses, and household consumers. With the advent of renewable power resources and EVs resulting in uncertain generation and highly dynamic load demands, it has become ever more important to ensure robust operation of power networks through suitable management of transient stability issues and localization of blackout events. In light of the ever-increasing stress on modern grid infrastructure and grid operators, this paper presents a reinforcement learning (RL) framework, PowRL, to mitigate the effects of unexpected network events and reliably maintain electricity everywhere on the network at all times. PowRL leverages a novel heuristic for overload management, along with RL-guided decision making on optimal topology selection, to ensure that the grid is operated safely and reliably (with no overloads). PowRL is benchmarked on a variety of competition datasets hosted by L2RPN (Learning to Run a Power Network). Even with its reduced action space, PowRL tops the leaderboard of the L2RPN NeurIPS 2020 challenge (Robustness track) at an aggregate level, and is also the top-performing agent in the L2RPN WCCI 2020 challenge. Moreover, detailed analysis shows state-of-the-art performance by the PowRL agent in several of the test scenarios.
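The hybrid decision rule can be sketched as follows (the observation format and action names are hypothetical stand-ins for grid quantities such as per-line loading ratios):

```python
def powrl_step(observation, rl_policy, heuristic, overload_threshold=0.95):
    """Hybrid control: route overloaded states to a dedicated overload-management
    heuristic; otherwise let the learned policy pick a topology action."""
    if max(observation["rho"]) > overload_threshold:
        return heuristic(observation)   # heuristic overload management
    return rl_policy(observation)       # RL-guided topology selection

# Toy stand-ins for the grid state and the two decision components.
obs = {"rho": [0.60, 0.98, 0.72]}       # per-line loading ratios
print(powrl_step(obs,
                 rl_policy=lambda o: "do_nothing",
                 heuristic=lambda o: "reconfigure_substation"))
```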
Generative models trained using deep learning methods can be used as priors in under-determined inverse problems, including imaging from a sparse set of measurements. In this paper, we present MrSARP, a novel hierarchical deep generative model for SAR imagery that can jointly synthesize SAR images of a target at different resolutions. MrSARP is trained in conjunction with a critic that scores multi-resolution images jointly to decide whether they are realistic images of a target at different resolutions. We show how this deep generative model can be used to retrieve the high-spatial-resolution image from low-resolution images of the same target. The cost function of the generator is modified to improve its ability to retrieve the input parameters for a given set of resolution images. We evaluate the model's performance on simulated data using three standard error metrics for super-resolution, and compare it to upsampling and sparsity-based image-sharpening approaches.
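The retrieval step can be illustrated with a toy inversion of a multi-resolution generator (random search stands in for the paper's modified generator cost; all shapes and functions below are made up):

```python
import numpy as np

def retrieve_high_res(generator, low_res_obs, z_dim=16, iters=500, seed=0):
    """Invert the generator: find the latent z whose low-resolution output best
    matches the observation, then return the paired high-resolution output."""
    rng = np.random.default_rng(seed)
    best_z, best_err = None, np.inf
    for _ in range(iters):
        z = rng.standard_normal(z_dim)
        low_hat, _ = generator(z)
        err = np.mean((low_hat - low_res_obs) ** 2)
        if err < best_err:
            best_z, best_err = z, err
    return generator(best_z)[1]

def toy_generator(z):
    """Toy joint generator: a 4x4 'high-res' image and its 2x2 average-pooled
    'low-res' counterpart."""
    high = np.outer(np.sin(z[:4]), np.cos(z[4:8]))
    low = high.reshape(2, 2, 2, 2).mean(axis=(1, 3))
    return low, high

target_low, _ = toy_generator(np.ones(16))
print(retrieve_high_res(toy_generator, target_low).shape)  # -> (4, 4)
```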
Modern Deep Learning (DL) models have grown to sizes requiring massive clusters of specialized, high-end nodes to train. Designing such clusters to maximize both performance and utilization, so as to amortize their steep cost, is a challenging task requiring a careful balance of compute, memory, and network resources. Moreover, each model exposes a plethora of tuning knobs that drastically affect performance, with optimal values often depending on the underlying cluster's characteristics; this necessitates a complex cluster-workload co-design process. To facilitate the design space exploration of such massive DL training clusters, we introduce COMET, a holistic cluster design methodology and workflow for jointly studying the impact of parallelization strategies and key cluster resource provisioning on the performance of distributed DL training. We develop a step-by-step process to establish a reusable and flexible methodology, and demonstrate its application with a case study of training a Transformer-1T model on a cluster with variable compute, memory, and network resources. Our case study demonstrates COMET's utility in identifying promising architectural optimization directions and guiding system designers in configuring key model and cluster parameters.
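At its core, such a sweep can be sketched as a joint enumeration of workload and cluster knobs under an analytical performance model (everything below, including the cost model, is a made-up stand-in for illustration):

```python
from itertools import product

def explore_design_space(model_flops, candidate_configs, step_time_model):
    """Rank (parallelization, cluster-resource) configurations by the step
    time predicted by an analytical performance model."""
    results = [(step_time_model(model_flops, **cfg), cfg) for cfg in candidate_configs]
    return sorted(results, key=lambda r: r[0])

# Hypothetical knobs: data/tensor/pipeline parallelism and network bandwidth.
configs = [dict(dp=dp, tp=tp, pp=pp, bw_gbps=bw)
           for dp, tp, pp, bw in product([8, 16], [4, 8], [2, 4], [200, 400])]

def toy_step_time(flops, dp, tp, pp, bw_gbps):
    compute = flops / (dp * tp * pp * 1e15)   # ideal scaling at 1 PFLOP/s per device
    comm = 50.0 * (dp - 1) / dp / bw_gbps     # crude gradient all-reduce term
    bubble = (pp - 1) * 0.01                  # crude pipeline-bubble penalty
    return compute + comm + bubble

best_time, best_cfg = explore_design_space(1e18, configs, toy_step_time)[0]
print(best_cfg, f"estimated step time: {best_time:.3f}s")
```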
Observational studies have recently received significant attention from the machine learning community, due to the increasing availability of non-experimental observational data and the limitations of experimental studies, such as considerable cost, impracticality, and small or unrepresentative sample sizes. In observational studies, de-confounding is a fundamental problem in the estimation of individualised treatment effects (ITE). This paper proposes disentangled representations with adversarial training to selectively balance the confounders in the binary-treatment setting for ITE estimation. Adversarial training of the treatment policy selectively encourages treatment-agnostic balanced representations of the confounders and helps estimate the ITE in observational studies via counterfactual inference. Empirical results on synthetic and real-world datasets, with varying degrees of confounding, show that our proposed approach improves on state-of-the-art methods, achieving lower error in ITE estimation.
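A minimal sketch of the adversarial balancing idea, assuming a simple MLP encoder and treatment discriminator (toy data; not the paper's architecture or exact losses):

```python
import torch
import torch.nn as nn

# Encoder maps covariates x to a representation phi(x); the discriminator tries
# to predict treatment t from phi(x); the encoder is trained to defeat it, which
# pushes phi(x) toward treatment-agnostic, balanced representations.
encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 8))
discriminator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(256, 10)                    # toy covariates
t = torch.randint(0, 2, (256, 1)).float()   # binary treatment assignment

for _ in range(100):
    # 1) Discriminator learns to predict treatment from the (frozen) representation.
    loss_d = bce(discriminator(encoder(x).detach()), t)
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()
    # 2) Encoder is updated adversarially to make treatment unpredictable.
    loss_e = -bce(discriminator(encoder(x)), t)
    opt_enc.zero_grad(); loss_e.backward(); opt_enc.step()
```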
Causal and attribution studies are essential for earth science discoveries and crucial for informing climate, ecology, and water policy. However, current methods must contend with the complexity of scientific and stakeholder challenges, data availability, and the adequacy of data-driven methods. Unless carefully informed by physics, they risk confusing correlation with causation, or being overwhelmed by estimation inaccuracies. Since natural experiments, controlled trials, interventions, and counterfactual examinations are often impractical, information-theoretic methods have been developed and are continually being refined in the earth sciences. Here we show that transfer-entropy-based causal graphs, which have recently become popular in the earth sciences with high-profile discoveries, can be spurious even when augmented with statistical significance testing. We develop a subsample-based ensemble approach for robust causal analysis. Simulated data, together with observations from climate and ecohydrology, demonstrate the robustness and consistency of this approach.
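A sketch of the subsample-ensemble idea on top of a crude plug-in transfer-entropy estimator (the bin counts, subsample sizes, and estimator itself are illustrative choices, not the paper's exact method):

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate of TE(x -> y) at lag 1 from quantile-binned histograms."""
    xp, yp, yf = x[:-1], y[:-1], y[1:]
    disc = lambda v: np.digitize(v, np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1]))
    xp, yp, yf = disc(xp), disc(yp), disc(yf)
    te = 0.0
    for a in np.unique(yf):
        for b in np.unique(yp):
            for c in np.unique(xp):
                mask = (yp == b) & (xp == c)
                p_abc = np.mean((yf == a) & mask)
                if p_abc == 0:
                    continue
                p_a_given_bc = p_abc / np.mean(mask)
                p_a_given_b = np.mean((yf == a) & (yp == b)) / np.mean(yp == b)
                te += p_abc * np.log(p_a_given_bc / p_a_given_b)
    return te

def ensemble_te(x, y, n_sub=30, frac=0.7, seed=0):
    """Recompute TE on random contiguous subsamples; a causal link is trusted
    only if it persists across the ensemble, not just on the full record."""
    rng = np.random.default_rng(seed)
    n = len(x); m = int(frac * n)
    starts = rng.integers(0, n - m, size=n_sub)
    return np.array([transfer_entropy(x[s:s + m], y[s:s + m]) for s in starts])

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(500)   # x drives y at lag 1
print(ensemble_te(x, y).mean() > ensemble_te(y, x).mean())  # expected: True
```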
Many business scenarios require automatically generating descriptive, human-readable text from structured input data. Hence, fact-to-text systems have been developed for various downstream tasks, largely owing to the high availability of relevant datasets. Only recently was the problem of cross-lingual fact-to-text (XF2T) generation across multiple languages proposed, along with a dataset, XAlign, covering eight languages. However, there has been no rigorous work on the actual XF2T generation problem itself. We extend the XAlign dataset with annotated data for four more languages: Punjabi, Malayalam, Assamese, and Oriya. We conduct an extensive study using popular Transformer-based text-generation models on the extended multilingual dataset, which we call XAlignV2. Further, we investigate the performance of different text-generation strategies: multiple variations of pre-training, fact-aware embeddings, and structure-aware input encoding. Our extensive experiments show that a multilingual mT5 model using fact-aware embeddings with structure-aware input encoding achieves the best results on average across the twelve languages. We make our code, dataset, and model publicly available, and hope this will help further research in this critical area.
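The structure-aware input encoding can be sketched as a tagged linearization of the input facts before they are fed to the seq2seq model (the token names and format below are hypothetical, not XAlignV2's exact scheme):

```python
def linearize_facts(entity, facts, lang_tag="<hi>"):
    """Mark each fact's role with special tokens so a seq2seq model (e.g. mT5)
    sees the structure of the input rather than a flat string."""
    parts = [lang_tag, "<entity>", entity]
    for predicate, obj in facts:
        parts += ["<rel>", predicate, "<obj>", obj]
    return " ".join(parts)

facts = [("occupation", "politician"), ("birth_date", "1950-03-02")]
print(linearize_facts("A. B. Example", facts))
# <hi> <entity> A. B. Example <rel> occupation <obj> politician <rel> birth_date <obj> 1950-03-02
```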